Results 1 - 6 of 6
1.
Proceedings of SPIE - The International Society for Optical Engineering ; 12626, 2023.
Article in English | Scopus | ID: covidwho-20245242

ABSTRACT

In 2020, the global spread of Coronavirus Disease 2019 exposed the entire world to a severe health crisis. Equipment shortages and harsh testing environments have limited fast and accurate screening of suspected cases. The diagnosis of suspected cases has benefited greatly from radiographic chest imaging, including X-ray and CT, as a crucial addition to screening tests for novel coronavirus pneumonia. However, it is impractical to gather enormous volumes of data quickly, which makes it difficult to train deep models. To solve these problems, we obtained a new dataset by applying the Mixup data augmentation method to the chest CT slices used. The approach uses the deep lung infection segmentation network Inf-Net [1] and adds a semi-supervised learning scheme to form the Mixup-Inf-Net semi-supervised learning framework, which identifies COVID-19 infection areas from chest CT slices. The system depends primarily on unlabeled data and requires only a minimal amount of annotated data; therefore, the unlabeled data generated by Mixup provide good assistance. Our framework can be used to improve learning and performance. The SemiSeg dataset and the actual 3D CT images that we produced are used in a variety of tests, and the analysis shows that the Mixup-Inf-Net semi-supervised learning framework outperforms most SOTA segmentation models in this study and enhances segmentation performance. © 2023 SPIE.
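As a rough illustration of the Mixup augmentation step described in this abstract (not the authors' code; the Beta-distribution parameter, the batch-level pairing, and the idea of blending masks alongside slices are assumptions), a minimal sketch in PyTorch might look like:

```python
import torch

def mixup_batch(images, masks, alpha=0.4):
    """Blend a batch of CT slices (and their infection masks) with a shuffled
    copy of itself, producing synthetic training samples.

    `alpha` controls the Beta distribution that draws the mixing weight;
    its value here is an assumption, not taken from the paper.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))                    # random pairing within the batch
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    mixed_masks = lam * masks + (1.0 - lam) * masks[perm]    # soft labels for segmentation
    return mixed_images, mixed_masks, lam

# Example: a batch of 8 single-channel 256x256 CT slices with binary infection masks
imgs = torch.rand(8, 1, 256, 256)
msks = torch.randint(0, 2, (8, 1, 256, 256)).float()
mixed_imgs, mixed_msks, lam = mixup_batch(imgs, msks)
```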

2.
Journal of Image and Graphics ; 27(12):3651-3662, 2022.
Article in Chinese | Scopus | ID: covidwho-2203674

ABSTRACT

Objective To alleviate the COVID-19 (coronavirus disease 2019) pandemic, the initial priority is to identify and isolate infectious patients in time. The traditional PCR (polymerase chain reaction) screening method is costly and time-consuming. Emerging AI (artificial intelligence)-based deep learning networks have recently been applied in medical imaging for COVID-19 diagnosis and pathological lung segmentation. However, current networks are mostly restricted to experimental datasets with a limited number of chest X-ray (CXR) images, and they typically focus on a single task of diagnosis or segmentation. Most networks are based on the convolutional neural network (CNN), whose convolution operation extracts local features from neighboring pixels and is constrained in explicitly modeling long-range dependencies. We develop a vision transformer network (ViTNet), in which the multi-head attention (MHA) mechanism models long-range dependencies between pixels. Method We built a novel transformer network called ViTNet for both diagnosis and segmentation. ViTNet is composed of three parts: dual-path feature embedding, a transformer module, and a segmentation-oriented feature decoder. 1) The dual-path feature embedding processes the input CXR in two ways. One path uses a 2D convolution whose stride equals the convolution kernel size, dividing a CXR into multiple patches and building an input vector for each patch. The other path uses a pre-trained ResNet34 backbone to extract a deep CXR feature map. 2) The transformer module is composed of six encoders and one cross-attention module. The vector sequence generated by the 2D convolution is the input to the transformer encoders. Because the encoder inputs are extracted directly from image pixels, they can be regarded as shallow, intuitive CXR features. The six encoders are applied sequentially, transforming the shallow features into advanced global features. The cross-attention module takes the backbone features and the transformer encoder outputs as inputs, so the network can combine the deep features with the encoded shallow features, absorbing global information from the encoded shallow features and local information from the deep features. 3) The segmentation-oriented feature decoder doubles the size of the feature map and produces the segmentation result. Our network handles the two tasks of classification and segmentation simultaneously. A hybrid loss function is employed for training, which balances the training efforts between classification and segmentation: the classification loss is the sum of a contrastive loss and a multi-class cross-entropy loss, and the segmentation loss is a binary cross-entropy loss. In addition, a new five-class CXR dataset is compiled, containing 2 951 CXRs of COVID-19, 16 964 CXRs of healthy subjects, 6 103 CXRs of bacterial pneumonia, 5 725 CXRs of viral pneumonia, and 6 723 CXRs of lung opacity. In this dataset, all COVID-19 CXRs are labeled with COVID-19 infected lung masks. During training, input images were resized to 448 × 448 pixels, the learning rate was initially set to 2 × 10⁻⁴ and decreased gradually in a self-adaptive manner, the total number of iterations was 200, and the Adam optimization procedure was run on four Tesla K80 GPU devices.
Result In the classification experiments, we compared ViTNet with a general transformer network and five popular CNN deep-learning models (i.e., ResNet18, ResNet50, VGG16 (Visual Geometry Group), Inception_v3, and the deep layer aggregation network (DLAN)) in terms of overall prediction accuracy, recall rate, F1 score, and kappa value. Our model performs best with 95.37% accuracy, followed by Inception_v3 and DLAN with 95.17% and 94.40% accuracy, respectively, while VGG16 reaches 94.19% accuracy. For the recall rate, F1 score, and kappa value, our model also outperforms the other networks. For the segmentation experiments, ViTNet is compared with four commonly used segmentation networks: the pyramid scene parsing network (PSPNet), U-Net, U-Net+, and the context encoder network (CE-Net). The evaluation indicators are accuracy, sensitivity, specificity, Dice coefficient, and area under the receiver operating characteristic (ROC) curve (AUC). The experimental results show that our model achieves the best accuracy and AUC, and its sensitivity is second best, inferior only to U-Net+. More specifically, our model achieves 95.96% accuracy, 78.89% sensitivity, 97.97% specificity, 98.55% AUC, and a Dice coefficient of 76.68%. In terms of efficiency, our model processes one CXR in 0.56 s. In addition, we present the segmentation results of six COVID-19 CXR images obtained by all the segmentation networks; as illustrated in Fig. 5, our model has the best segmentation performance. A limitation of our model is that it occasionally misclassifies COVID-19 cases as healthy, which is not acceptable. The PCR method for COVID-19 is probably more reliable than the deep-learning method, but the turnaround time of its test result is typically 1 or 2 days. Conclusion A novel ViTNet method is developed, which simultaneously performs automatic diagnosis on CXR and lung region segmentation for COVID-19 infection. ViTNet shows superior diagnosis performance and demonstrates promising segmentation ability. © 2022 Editorial and Publishing Board of JIG. All rights reserved.
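As a hedged sketch of the hybrid loss described in this abstract (the specific contrastive formulation, weighting factors, and tensor shapes are assumptions; the paper's exact definition may differ), one way to combine the classification and segmentation terms in PyTorch is:

```python
import torch
import torch.nn.functional as F

def hybrid_loss(class_logits, class_labels,
                seg_logits, seg_masks,
                embeddings, margin=1.0, w_cls=1.0, w_seg=1.0):
    """Classification loss = contrastive loss + multi-class cross-entropy;
    segmentation loss = binary cross-entropy. Weights and the margin are
    illustrative assumptions."""
    # multi-class cross-entropy over the five CXR classes
    ce = F.cross_entropy(class_logits, class_labels)

    # simple pairwise contrastive term: pull same-class embeddings together,
    # push different-class embeddings at least `margin` apart
    dists = torch.cdist(embeddings, embeddings)                      # (B, B) pairwise distances
    same = (class_labels[:, None] == class_labels[None, :]).float()
    contrastive = (same * dists.pow(2)
                   + (1 - same) * F.relu(margin - dists).pow(2)).mean()

    # binary cross-entropy for the infection segmentation map
    bce = F.binary_cross_entropy_with_logits(seg_logits, seg_masks)

    return w_cls * (ce + contrastive) + w_seg * bce
```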

3.
9th International Conference on Orange Technology, ICOT 2021 ; 2021.
Article in English | Scopus | ID: covidwho-1752402

ABSTRACT

The global suffering caused by COVID-19 since 2020 has constrained learners and workers from going outdoors, and campuses and public areas have naturally faced environmental protection issues. For such cases, a new mobile AI application, named Self-Driving Sweeper Bot (SDSB), is proposed by intelligently coordinating a self-driving system with a sweeper mechanism. In this paper, we report a perspective on SDSB in terms of human visual knowledge and intelligence, balancing pedestrian safety and sweeping efficiency on campus. To reach this goal, our investigation shows that human visual knowledge and intelligence play a critical role, requiring the routine collection and learning of visual datasets, accompanied by an optimization procedure that explores object recognition methods (e.g., CNN, R-CNN, Fast R-CNN, and YOLO) for detecting campus objects (including pedestrians, vehicles, and common rubbish such as fallen leaves, waste paper, and plastic bottles) and image segmentation techniques (e.g., U-Net) for constraining the sweeping road. In preliminary experiments, we observe that factors such as weather, sunshine direction, and shadowing (or its absence) by trees and facilities strongly influence object detection and road segmentation, and hence SDSB visual intelligence. © 2021 IEEE.
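As a hedged illustration of the kind of object-recognition component explored in this abstract (this uses an off-the-shelf torchvision Faster R-CNN pretrained on COCO as a stand-in, not the authors' campus-specific detector; the input file name and confidence threshold are assumptions):

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained COCO detector used here only as a generic stand-in
model = fasterrcnn_resnet50_fpn(pretrained=True).eval()

image = Image.open("campus_scene.jpg").convert("RGB")   # hypothetical input frame
with torch.no_grad():
    outputs = model([to_tensor(image)])[0]

# Keep detections above an assumed confidence threshold
keep = outputs["scores"] > 0.6
boxes, labels = outputs["boxes"][keep], outputs["labels"][keep]
print(f"{len(boxes)} objects detected")
```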

4.
2021 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2021 ; : 2775-2780, 2021.
Article in English | Scopus | ID: covidwho-1699881

ABSTRACT

To control the spread of COVID-19, accurate and efficient diagnosis of suspected cases is the crux of appropriate quarantine and prompt treatment. In addition to the diagnosis of COVID-19, it is vital to distinguish the severity of confirmed cases, which is conducive to the selection and planning of treatment methods. At a critical juncture of the epidemic crisis, we are committed to developing a non-local hierarchical refinement fully convolutional network to assist experts in automatically segmenting the pneumonia infection regions caused by COVID-19. The architecture of the proposed model is a deep encoder-decoder framework. Specifically, we exploit a non-local perception module to capture more complementary coarse-structure information from different pyramid levels and a local refinement module to explicitly heighten the fine-detail information of each convolution layer. Moreover, the non-local and local features are aggregated through a boundary-aware multiple supervision strategy to produce a gratifying edge-preserving segmentation map. Comprehensive experiments demonstrate the effectiveness of the proposed model in boosting the ability to accurately identify infected lung regions with clear contours. In particular, our model remarkably outperforms state-of-the-art segmentation models both quantitatively and qualitatively on a real CT dataset of COVID-19. © 2021 IEEE.
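A minimal sketch of a non-local (self-attention style) block of the kind the non-local perception module above builds on (following the generic embedded-Gaussian non-local formulation; channel sizes and the exact attention form are assumptions, not the authors' design):

```python
import torch
import torch.nn as nn

class NonLocalBlock2D(nn.Module):
    """Embedded-Gaussian non-local block: every spatial position attends to
    all other positions, capturing coarse global structure in the feature map."""
    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or channels // 2
        self.theta = nn.Conv2d(channels, reduced, kernel_size=1)
        self.phi = nn.Conv2d(channels, reduced, kernel_size=1)
        self.g = nn.Conv2d(channels, reduced, kernel_size=1)
        self.out = nn.Conv2d(reduced, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.phi(x).flatten(2)                     # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (B, HW, C')
        attn = torch.softmax(q @ k, dim=-1)            # pairwise affinities over all positions
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection

feat = torch.rand(2, 64, 32, 32)
print(NonLocalBlock2D(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```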

5.
10th International Workshop on Clinical Image-Based Procedures, CLIP 2021, 2nd MICCAI Workshop on Distributed and Collaborative Learning, DCL 2021, 1st MICCAI Workshop, LL-COVID19, 1st Secure and Privacy-Preserving Machine Learning for Medical Imaging Workshop and Tutorial, PPML 2021, held in conjunction with 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021 ; 12969 LNCS:150-159, 2021.
Article in English | Scopus | ID: covidwho-1565297

ABSTRACT

Early detection of the coronavirus disease 2019 (COVID-19) helps to treat patients in a timely manner and increases the cure rate, thus further suppressing the spread of the disease. In this study, we propose a novel deep-learning-based detection and similar-case recommendation network to help control the epidemic. Our proposed network contains two stages: the first is a lung region segmentation step used to exclude irrelevant factors, and the second is a detection and recommendation stage. Under this framework, in the second stage, we develop a dual-children network (DuCN) based on a pre-trained ResNet-18 to simultaneously realize disease diagnosis and similar-case recommendation. In addition, we employ triplet loss and intrapulmonary distance maps to assist the detection, which helps incorporate tiny differences between two images and is conducive to improving diagnostic accuracy. For each confirmed COVID-19 case, we provide similar cases to give radiologists diagnosis and treatment references. We conduct experiments on a large publicly available dataset (CC-CCII) and compare the proposed model with state-of-the-art COVID-19 detection methods. The results show that our proposed model achieves promising clinical performance. © 2021, Springer Nature Switzerland AG.
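As a hedged sketch of how a triplet loss can assist the detection stage described above (using PyTorch's built-in TripletMarginLoss and a ResNet-18 embedding extractor; the margin, batch construction, and the idea of sampling positives and negatives by diagnosis label are assumptions, not the paper's exact protocol):

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# ResNet-18 backbone reused as an embedding extractor (classification head removed)
backbone = resnet18(pretrained=True)
backbone.fc = nn.Identity()

triplet = nn.TripletMarginLoss(margin=1.0)

# Hypothetical batch: anchor and positive share a diagnosis label, negative differs
anchor = torch.rand(4, 3, 224, 224)
positive = torch.rand(4, 3, 224, 224)
negative = torch.rand(4, 3, 224, 224)

loss = triplet(backbone(anchor), backbone(positive), backbone(negative))
loss.backward()  # in training this would be combined with the diagnosis loss
```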

6.
Med Phys ; 48(4): 1633-1645, 2021 Apr.
Article in English | MEDLINE | ID: covidwho-938495

ABSTRACT

OBJECTIVE: Computed tomography (CT) provides rich diagnosis and severity information of COVID-19 in clinical practice. However, there is no computerized tool to automatically delineate COVID-19 infection regions in chest CT scans for quantitative assessment in advanced applications such as severity prediction. The aim of this study was to develop a deep learning (DL)-based method for automatic segmentation and quantification of infection regions as well as the entire lungs from chest CT scans. METHODS: The DL-based segmentation method employs the "VB-Net" neural network to segment COVID-19 infection regions in CT scans. The developed DL-based segmentation system is trained on CT scans from 249 COVID-19 patients and further validated on CT scans from another 300 COVID-19 patients. To accelerate the manual delineation of CT scans for training, a human-involved-model-iterations (HIMI) strategy is also adopted to assist radiologists in refining the automatic annotation of each training case. To evaluate the performance of the DL-based segmentation system, three metrics, namely the Dice similarity coefficient, volume difference, and percentage of infection (POI), are calculated between automatic and manual segmentations on the validation set. Then, a clinical study on severity prediction is reported based on the quantitative infection assessment. RESULTS: The proposed DL-based segmentation system yielded Dice similarity coefficients of 91.6% ± 10.0% between automatic and manual segmentations, and a mean POI estimation error of 0.3% for the whole lung on the validation dataset. Moreover, compared with fully manual delineation, which often takes hours, the proposed HIMI training strategy can dramatically reduce the delineation time to 4 min after three iterations of model updating. In addition, the best severity prediction accuracy was 73.4% ± 1.3% when the mass of infection (MOI) of multiple lung lobes and bronchopulmonary segments was used as features for severity prediction, indicating the potential clinical application of our quantification technique in severity prediction. CONCLUSIONS: A DL-based segmentation system has been developed to automatically segment and quantify infection regions in CT scans of COVID-19 patients. Quantitative evaluation indicated high accuracy in automatic infection delineation and severity prediction.
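As a rough illustration of the evaluation quantities mentioned above, the Dice similarity coefficient and the percentage of infection (POI) can be computed from binary masks as follows (the smoothing constant and mask conventions are assumptions, not the paper's implementation):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-6):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

def percentage_of_infection(infection_mask, lung_mask):
    """POI = infected volume / whole-lung volume, in percent."""
    return 100.0 * infection_mask.astype(bool).sum() / lung_mask.astype(bool).sum()

# Hypothetical 3D masks from a segmented CT scan
infection = np.zeros((64, 128, 128), dtype=np.uint8)
lungs = np.ones((64, 128, 128), dtype=np.uint8)
infection[20:30, 40:60, 40:60] = 1
print(dice_coefficient(infection, infection), percentage_of_infection(infection, lungs))
```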


Subject(s)
COVID-19/diagnostic imaging; Deep Learning; Image Interpretation, Computer-Assisted; Lung/diagnostic imaging; Tomography, X-Ray Computed; Humans